Common Sense, the Turing Test, and the Quest for Real AI

Author: Hector J. Levesque

Overview

This book explores the quest for real AI, focusing on the role of common sense reasoning and challenging the current emphasis on data-driven machine learning. I argue that while adaptive machine learning (AML) has achieved remarkable success in specific tasks, it neglects the core aspect of human intelligence: the ability to use background knowledge to navigate novel situations. My central thesis is that truly intelligent systems must be knowledge-based, meaning that they represent beliefs symbolically and use them for reasoning and decision-making. I revisit the original vision of AI pioneers like John McCarthy, who emphasized common sense and the importance of knowledge representation, and argue for its continued relevance.

To illustrate the limitations of relying solely on training data, I introduce the concept of the “long tail,” where rare but significant events are missed by systems trained only on common cases. As an alternative to the Turing Test, I propose the Winograd Schema Challenge, a more robust test of machine understanding that requires common sense reasoning. The book also delves into the technical underpinnings of AI, discussing symbol processing as the foundation of digital computation and exploring the philosophical questions surrounding knowledge, belief, and intelligence.

Finally, I address the societal implications of AI technology, arguing that the real risk lies not in superintelligence, but in granting autonomy to systems lacking common sense. The book is intended for a general audience interested in AI, including students, professionals, and anyone curious about the future of intelligent machines. It offers a critical perspective on the current state of AI, urging a return to the foundational principles of knowledge representation and reasoning to achieve true artificial intelligence. The ideas presented aim to rekindle the debate about what constitutes real AI and offer a roadmap for building truly intelligent systems.
While acknowledging the advancements made in AML, the book challenges the notion that simply scaling up data and computing power will lead to human-level intelligence. Instead, it champions a knowledge-based approach, where common sense reasoning plays a central role. In a world increasingly reliant on AI, understanding these distinctions is crucial for developing safe and beneficial intelligent technologies.

Book Outline

1. What Kind of AI?

Current AI is dominated by adaptive machine learning (AML), which uses ‘big data’ to train systems. While AML excels in specific tasks, it largely ignores the role of common sense reasoning, which is central to human intelligence.

Key concept: Adaptive machine learning (AML): This dominant approach in AI focuses on training computer systems to exhibit intelligent behavior by exposing them to massive datasets. Its success relies on ‘big data,’ powerful computational techniques, and fast computers. However, it differs significantly from the original vision of AI centered around common sense.

2. The Big Puzzle

Understanding the mind requires acknowledging its complexity. Focusing too narrowly on a single scientific discipline, such as neurology or psychology, won’t uncover the entire picture. My work takes a design stance, concentrating on intelligent behavior without delving into the experience of thinking.

Key concept: The Big Puzzle Issue: Mistaking a part for the whole is a common error in complex domains like AI. It’s tempting to explain human thinking with a single theory or approach (like neurology, psychology, evolution, or language). This ignores the staggering complexity of the mind and the importance of multidisciplinary views.

3. Knowledge and Behavior

Intelligent behavior relies on background knowledge and is not simply a matter of stimulus and response. The ability to apply what we know to new situations is what distinguishes truly intelligent behavior from reflexive reactions. This is seen in how we use language, make sense of visual scenes, and plan future actions.

Key concept: Knowledge Representation Hypothesis: Intelligent behavior involves more than simple stimulus-response. We use background knowledge to navigate new situations, and this dependence on knowledge is a core aspect of intelligent behavior.

4. Making It and Faking It

The Turing Test, while groundbreaking, is vulnerable to tricks and doesn’t truly measure understanding. Informal conversation allows for too much deception, as evidenced by chatbots like ELIZA and Eugene Goostman that fool judges without possessing real-world knowledge or reasoning abilities.

Key concept: The Turing Test, while emphasizing observable behavior, is flawed because conversation is too easily faked. The Loebner competition highlights how canned responses and tricks can fool interrogators, not demonstrate true intelligence.

5. Learning with and without Experience

We learn through both experience and language. Simple words are often acquired through sensory association, while more abstract terms are learned through definitions and explanations. The ability to use language to learn more language is unique to humans and key to acquiring complex knowledge.

Key concept: Bootstrapping Language Problem: Learning words like ‘hungry’ happens through experiential association, while words like ‘incarnate’ are learned through language itself. This underscores two distinct learning mechanisms: experience and language.

6. Book Smarts and Street Smarts

Human language, specifically its use beyond immediate communication, is fundamental to our advanced technology. It enables the accumulation of scientific and mathematical knowledge across generations, which is impossible for animals limited to simpler communication.

Key concept: Advanced technology is inherently tied to language, especially its use beyond immediate communication. This allows us to accumulate knowledge and build upon the work of others across generations.

7. The Long Tail and the Limits to Training

Expertise gained through training often focuses on the most frequent situations. However, rare, unpredictable events (‘black swans’) can have significant consequences. Relying solely on training, without common sense reasoning, leaves systems vulnerable to these ‘long tail’ events.

Key concept: Long Tail Phenomenon: Focusing only on common cases of a phenomenon can lead to a flawed understanding, as rare but significant ‘black swan’ events can be missed. Training alone cannot prepare systems for all possible scenarios.
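
The long-tail idea can be made concrete with a toy simulation (not from the book): when event types follow a Zipf-like distribution, even a large training sample leaves many rare types entirely unseen.

```python
import random

random.seed(0)

# Toy model: event type k occurs with probability proportional to 1/k,
# a Zipf-like long-tailed distribution.
NUM_TYPES = 10_000
weights = [1.0 / k for k in range(1, NUM_TYPES + 1)]

# "Training" observes 50,000 events; count how many types were never seen.
training = random.choices(range(NUM_TYPES), weights=weights, k=50_000)
seen = set(training)
unseen = NUM_TYPES - len(seen)

# A large fraction of rare types is missed, yet collectively those types
# still account for a non-trivial share of future events -- the long tail.
print(f"Event types never seen in training: {unseen} of {NUM_TYPES}")
```

No amount of additional sampling eliminates the effect; it only pushes the unseen tail further out, which is why the book argues training alone cannot substitute for common sense reasoning.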

8. Symbols and Symbol Processing

Symbol processing, the manipulation of symbolic representations, is fundamental to how we think and to the operation of computers. From algebra to logic, the process of manipulating symbols according to rules allows us to draw new conclusions and perform complex calculations.

Key concept: Computation over symbols is a defining characteristic of digital computing. This manipulation of symbolic representations, whether numbers, logical formulas, or pixels in an image, underlies all digital computation.
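
A minimal sketch of symbol processing: the program below derives new conclusions purely by manipulating tokens according to a rule (if A then B), without any access to what the tokens mean. The facts and rules are illustrative, not from the book.

```python
# Symbolic facts and if-then rules, represented as plain tokens.
facts = {"raining"}
rules = [("raining", "wet_ground"), ("wet_ground", "slippery")]

# Repeatedly apply modus ponens until no new symbols can be derived.
changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(sorted(facts))  # ['raining', 'slippery', 'wet_ground']
```

The computer never "understands" rain; it only rewrites symbols, yet the conclusions it reaches are correct whenever the symbols are interpreted consistently, which is the point of the chapter.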

9. Knowledge-Based Systems

True AI requires more than just mimicking behavior; it needs to be knowledge-based. This means symbolically representing beliefs and reasoning with them, like calculating with numbers, to make decisions and take actions. While the form this representation and reasoning should take is still debated, the core idea that beliefs play a causal role in behavior remains essential.

Key concept: Knowledge Representation Hypothesis: Human-level intelligence requires systems to be knowledge-based, where beliefs are represented symbolically and used for reasoning and decision-making.
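
The hypothesis can be illustrated with a small sketch in which symbolically represented beliefs play a causal role in the choice of action; change the beliefs, and the behavior changes. All names here are hypothetical.

```python
# Beliefs stored as symbolic assertions, not as trained weights.
beliefs = {
    ("weather", "raining"): True,
    ("owns", "umbrella"): True,
}

def decide_action(beliefs: dict) -> str:
    # The decision is computed *from* the represented beliefs,
    # so the beliefs are causally responsible for the behavior.
    if beliefs.get(("weather", "raining")):
        if beliefs.get(("owns", "umbrella")):
            return "take umbrella"
        return "stay inside"
    return "go out as planned"

print(decide_action(beliefs))  # take umbrella
```

This is what distinguishes a knowledge-based system from one that merely mimics behavior: the same representation can be reused in situations the designer never anticipated.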

10. AI Technology

The biggest risk with current AI is not a sudden ‘superintelligence takeover’ but the premature granting of autonomy to imperfectly trained systems lacking common sense. This is especially dangerous in real-world applications where unexpected situations can have disastrous consequences. The emphasis should be on building reliable and predictable systems, rather than blindly pursuing increased autonomy.

Key concept: The real risk of AI is not superintelligence, but the overreliance on imperfectly trained systems lacking common sense to operate machinery and make critical decisions autonomously.

Essential Questions

1. What is the dominant approach in current AI, and how does it differ from the original vision of AI?

Adaptive Machine Learning (AML) uses statistical techniques on massive datasets to achieve results in specific tasks like image recognition and natural language processing. It leverages ‘big data,’ fast computers, and advanced algorithms, marking a significant shift from the initial AI focus on replicating human thought processes. AML excels in specific tasks but doesn’t address general intelligence or common sense, which are central to human intelligence.

2. What is common sense, and how does it relate to the ‘Big Puzzle’ of understanding intelligence?

Common sense is the ability to use background knowledge to navigate novel or unexpected situations effectively. It’s distinct from expertise gained through training, which focuses on common cases. The Big Puzzle issue refers to mistaking a part of intelligence (like language or perception) for the whole, neglecting its complex interplay with other cognitive processes.

3. What is the Knowledge Representation Hypothesis, and why is it a central argument in the book?

The knowledge representation hypothesis proposes that human-level intelligence requires systems to be knowledge-based, meaning that beliefs are symbolically represented and used for reasoning, similar to calculations with numbers. This is controversial as it emphasizes symbolic manipulation over statistical learning, contrasting with the current dominance of AML.

4. Why is the Turing Test considered flawed, and what alternatives are proposed?

The Turing Test, while focusing on observable behavior, is flawed because conversation can be faked. Systems can be designed to manipulate symbols and give the illusion of understanding without possessing real knowledge or common sense, as seen with chatbots like ELIZA. The book argues for more robust tests of machine understanding.

5. What is the primary risk associated with current AI technology, according to the author?

The main risk of AI is not superintelligence, but the premature granting of autonomy to systems without common sense. Over-reliance on imperfectly trained systems to control machinery or make critical decisions can lead to unpredictable and potentially disastrous consequences in novel situations.

Key Takeaways

1. Current AI lacks common sense.

Current AI systems, primarily based on adaptive machine learning, excel at tasks within their training data but struggle with novel situations. Human intelligence, however, leverages background knowledge and common sense to navigate unfamiliar scenarios. This limitation of current AI makes it brittle and prone to errors when confronted with unexpected inputs or problems.

Practical Application:

When designing a chatbot for customer service, don’t just focus on training it on common queries. Implement a mechanism for it to recognize when a situation is outside its training data and escalate it to a human operator. This prevents the bot from giving inappropriate or nonsensical responses to unusual requests.
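
One way to sketch that escalation mechanism, assuming a hypothetical intent classifier that returns confidence scores: when no intent is confidently matched, hand off to a human rather than guess.

```python
# Illustrative threshold; a real system would tune this empirically.
ESCALATION_THRESHOLD = 0.6

def handle_query(intent_scores: dict) -> str:
    # intent_scores maps intent names to classifier confidences (hypothetical).
    intent, confidence = max(intent_scores.items(), key=lambda kv: kv[1])
    if confidence < ESCALATION_THRESHOLD:
        # Outside the training distribution: defer to a human operator.
        return "escalate_to_human"
    return f"answer:{intent}"

print(handle_query({"billing": 0.92, "shipping": 0.05}))  # answer:billing
print(handle_query({"billing": 0.31, "shipping": 0.28}))  # escalate_to_human
```

The design choice matters more than the code: the system is built to recognize the limits of its own training, which is exactly the safeguard the takeaway calls for.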

2. Rare events (‘black swans’) are important.

Rare events, or ‘black swans,’ though infrequent, can have a disproportionate impact. Systems trained only on common cases are ill-equipped to handle these anomalies, which often require common sense reasoning and knowledge beyond the training data. This principle is relevant in various fields, including finance, technology, and safety-critical systems.

Practical Application:

In product development, consider edge cases and ‘what if’ scenarios that might not be captured in typical user stories. For instance, in designing a self-driving car, think about how it should react to a sudden whiteout condition or a pedestrian behaving erratically.

3. True AI needs a knowledge-based approach.

True intelligence involves not just learning from experience but also applying existing knowledge to solve new problems. This requires representing knowledge symbolically and having mechanisms for reasoning with that knowledge, as we do with numbers in mathematics and logic. This contrasts with purely data-driven approaches that rely solely on statistical correlations without genuine understanding.

Practical Application:

If training a robot to navigate a home, ensure it has a way to deal with unexpected obstacles. For example, if it encounters a closed door it has never seen before, instead of repeatedly bumping into it, the robot should be able to try other options like looking for an alternative path, waiting for the door to open, or signaling for human assistance.
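
That fallback behavior can be sketched as a prioritized list of strategies tried in order; the strategy names and `try_*` helpers below are hypothetical stand-ins for real robot capabilities.

```python
# Stub strategies; a real robot would implement these against its sensors.
def try_alternative_path() -> bool:
    return False  # pretend no other route exists

def wait_for_door() -> bool:
    return False  # pretend the door stays closed

def signal_for_help() -> bool:
    return True   # a human can always be notified

FALLBACKS = [
    ("reroute", try_alternative_path),
    ("wait", wait_for_door),
    ("ask_human", signal_for_help),
]

def handle_obstacle() -> str:
    # Try each strategy in priority order instead of repeating a failed action.
    for name, attempt in FALLBACKS:
        if attempt():
            return name
    return "give_up"

print(handle_obstacle())  # ask_human
```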

4. Autonomy without common sense is risky.

While automation can be beneficial, granting excessive autonomy to AI systems lacking common sense can be dangerous. These systems might make decisions that seem appropriate based on their training but fail to account for subtle contextual factors or rare situations requiring common sense judgment.

Practical Application:

AI systems should have clearly defined limitations and be predictable in their behavior. If an AI system for medical diagnosis encounters a case outside its expertise, it should flag it for review by a human physician rather than attempting to make a diagnosis beyond its competence.

Memorable Quotes

Chapter 1, p. 15

“You throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data.”

Chapter 1, p. 17

“We shall therefore say that a program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows.”

Chapter 2, p. 26

“What we need to care about is intelligent behavior, an agent making intelligent choices about what to do.”

Chapter 3, p. 37

“Knowledge, what you know and can bring to bear on what you are doing, is an essential component of human behavior.”

Chapter 5, p. 71

“It is not true that we have only one life to live; if we can read, we can live as many more lives and as many kinds of lives as we wish.”

Comparative Analysis

This book distinguishes itself by directly challenging the prevailing trends in AI research, particularly the overwhelming focus on adaptive machine learning (AML). While acknowledging AML’s successes, Levesque argues it is a detour from the original quest for true AI, echoing earlier criticisms by Hubert Dreyfus and, later, Jerry Fodor, who questioned the efficacy of purely computational approaches to replicating human thought. Unlike works such as Nick Bostrom’s ‘Superintelligence’, which explore the potential dangers of future AI, Levesque downplays the singularity and focuses on the more immediate risks of systems lacking common sense. He agrees with Daniel Dennett’s emphasis on behavior as a key indicator of intelligence but questions the Turing Test’s effectiveness in distinguishing genuine understanding from clever imitation. This book’s emphasis on common sense aligns with Stuart Russell and Peter Norvig’s comprehensive AI textbook, which also discusses knowledge representation and reasoning, though in a more technical manner.

Reflection

Levesque presents a compelling argument for the importance of common sense in AI. However, his dismissal of the potential for systems to learn common sense from large datasets, while understandable given current limitations, might be overly pessimistic. The field of AI is constantly evolving, and future breakthroughs in areas like knowledge representation and reasoning could potentially bridge the gap between AML and GOFAI (‘good old-fashioned AI’, the symbolic, knowledge-based tradition). Furthermore, his focus on the risks of autonomy without common sense, while valid, neglects the potential benefits of such systems in specific, well-defined domains. A nuanced approach considering both the risks and benefits of AI automation is crucial. Despite this potential bias against AML, the book’s central message regarding the importance of common sense remains relevant and thought-provoking. It serves as a crucial reminder that true AI requires more than just sophisticated pattern recognition; it needs a deeper understanding of the world and the ability to reason about it in a way that approximates human intelligence. By challenging the prevailing paradigm in AI research, Levesque’s book pushes us to rethink our approach to building intelligent systems and to prioritize the development of systems that are not only powerful but also robust, reliable, and ethically sound. This book is a valuable contribution to the ongoing discussion about the future of AI and the role of knowledge representation and reasoning in achieving genuine artificial intelligence.

Flashcards

What is Adaptive Machine Learning (AML)?

A dominant AI approach using large datasets and statistical methods to train systems for specific tasks, but lacking general intelligence and common sense.

Define ‘common sense’ in the context of AI.

The ability to use background knowledge to navigate novel situations effectively, distinguishing it from expertise developed through training for common scenarios.

What is the ‘Big Puzzle Issue’ in understanding intelligence?

Mistaking a part of intelligence (e.g., language, perception) for the whole, neglecting its complex interaction with other cognitive abilities.

Explain the Knowledge Representation Hypothesis.

The idea that truly intelligent systems must represent beliefs symbolically and use them for reasoning to guide behavior, like calculations with numbers.

Why is the Turing Test considered flawed?

Informal conversation is too easily faked. Systems can manipulate symbols to create the illusion of understanding without true knowledge.

What is the Winograd Schema Challenge?

A proposed alternative to the Turing Test that involves resolving pronoun references in carefully constructed sentences, requiring common sense reasoning.
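
The canonical example from the challenge literature can be written as a small data structure; note how swapping a single word flips the correct referent, which is what defeats systems relying on surface statistics alone.

```python
# A classic Winograd schema: one word change flips the answer.
schema = {
    "sentence": "The trophy doesn't fit in the brown suitcase because it is too {}.",
    "question": "What is too {}?",
    "variants": {"big": "the trophy", "small": "the suitcase"},
}

for word, answer in schema["variants"].items():
    print(schema["sentence"].format(word), "->", answer)
```

Answering correctly requires knowing how containers and sizes relate in the world, not just which words co-occur.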

What are ‘Black Swan’ events, and what is the ‘Long Tail Phenomenon’?

Rare but significant events that are often overlooked by systems trained on common cases, highlighting limitations of purely statistical approaches.

What is symbol processing?

Manipulation of symbolic representations according to specific rules, which underlies all digital computation, whether numerical, logical, or image-based.

What is the real risk of current AI, according to the book?

Granting too much autonomy to imperfectly trained systems lacking common sense, especially in controlling machinery or making critical decisions.
